Diverse data formats and ontologies of task-oriented dialogue (TOD) datasets hinder us from developing general dialogue models that perform well on many datasets and from studying knowledge transfer between datasets. To address this issue, we present ConvLab-3, a flexible dialogue system toolkit based on a unified TOD data format. In ConvLab-3, different datasets are transformed into one unified format and loaded by models in the same way. As a result, the cost of adapting a new model or dataset is significantly reduced. Compared to the previous releases of ConvLab (Lee et al., 2019b; Zhu et al., 2020b), ConvLab-3 allows developing dialogue systems with many more datasets and enhances the utility of the reinforcement learning (RL) toolkit for dialogue policies. To showcase the use of ConvLab-3 and inspire future work, we present a comprehensive study with various settings. We show the benefit of pre-training on other datasets for few-shot fine-tuning and RL, and encourage evaluating policies with diverse user simulators.
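A unified data format as described above can be pictured as normalizing every dataset-specific turn into one shared schema that all models load the same way. The sketch below is our illustration of the idea; the field names are hypothetical, not ConvLab-3's actual schema.

```python
# Hypothetical sketch of a unified TOD data format: every dataset-specific
# turn is normalized into the same dictionary schema, so any model can
# consume any dataset through one loader. Field names are illustrative.

def to_unified_turn(speaker, utterance, dialogue_acts):
    """Normalize one dialogue turn into the shared schema."""
    return {
        "speaker": speaker,  # "user" or "system"
        "utterance": utterance,
        "dialogue_acts": [
            # each act: (intent, domain, slot, value)
            {"intent": i, "domain": d, "slot": s, "value": v}
            for (i, d, s, v) in dialogue_acts
        ],
    }

turn = to_unified_turn(
    "user",
    "I need a cheap hotel in the north.",
    [("inform", "hotel", "price", "cheap"),
     ("inform", "hotel", "area", "north")],
)
print(turn["dialogue_acts"][0]["slot"])  # price
```

Once every dataset is expressed in such a schema, adding a new model or dataset only requires one conversion step rather than per-pair adapters.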
It is difficult to collect enough defective images to train deep learning networks in industrial production. Therefore, existing industrial anomaly detection methods prefer CNN-based unsupervised detection and localization networks to achieve this task. However, these methods always fail because conventional end-to-end networks struggle to fit nonlinear models in high-dimensional space. Moreover, they locate anomalies by clustering the features of normal images, which is essentially why they are not robust to texture variations. To this end, we propose a Vision-Transformer-based (ViT-based) unsupervised anomaly detection network. It exploits hierarchical task learning and human experience to enhance its interpretability. Our network consists of a pattern generation network and a comparison network. The pattern generation network uses two ViT-based encoder modules to extract the features of two consecutive image patches, and then a ViT-based decoder module to learn the human-designed style of these features and predict the third image patch. After that, we use a Siamese-based network to compute the similarity of the generated image patch and the original image patch. Finally, we refine the anomaly localization with a bidirectional inference strategy. Comparison experiments on the public MVTec dataset show that our method achieves 99.8% AUC, surpassing previous state-of-the-art methods. In addition, we give qualitative illustrations on our own leather and cloth datasets. The accurate segmentation results strongly demonstrate the accuracy of our method for anomaly detection.
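The comparison step described above, where a Siamese-style network scores the generated patch against the original one, can be sketched as a simple feature-distance check. The scoring convention below is our illustration under that reading, not the paper's exact formulation.

```python
import numpy as np

# Hedged sketch of a Siamese-style comparison: a large distance between
# the features of the generated patch and the original patch signals an
# anomaly. Cosine distance is our choice for illustration.

def anomaly_score(feat_generated: np.ndarray, feat_original: np.ndarray) -> float:
    """Cosine distance between the two patch feature vectors."""
    a = feat_generated / np.linalg.norm(feat_generated)
    b = feat_original / np.linalg.norm(feat_original)
    return 1.0 - float(a @ b)

# identical features -> score ~ 0 (normal); orthogonal -> score 1 (anomalous)
print(anomaly_score(np.array([1.0, 0.0]), np.array([1.0, 0.0])))
```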
In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structures (uni/multi-modal encoders/decoders) and depends on external modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture to one image encoder and one text decoder under a single language modeling task. We also scale up the pre-training data and the model size to boost model performance. Without bells and whistles, our GIT establishes a new state of the art on 12 challenging benchmarks. For instance, our model surpasses human performance for the first time on TextCaps (138.2 vs. 125.5 in CIDEr). Furthermore, we present a new scheme of generation-based image classification and scene text recognition, achieving decent performance on standard benchmarks.
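The single language-modeling objective described above can be pictured as feeding image features to the text decoder as a prefix and applying the loss only to text positions. The sketch below is our illustration; token ids and the helper name are made up for the example.

```python
# Minimal sketch of a single language-modeling objective: image features
# act as a prefix to a text decoder, and the cross-entropy loss is applied
# only to the text positions. Token ids are illustrative.

def build_lm_inputs(image_tokens, caption_tokens, bos=1, eos=2):
    """Concatenate image prefix and caption; mask out the prefix for the loss."""
    seq = list(image_tokens) + [bos] + list(caption_tokens) + [eos]
    loss_mask = [0] * len(image_tokens) + [1] * (len(caption_tokens) + 2)
    return seq, loss_mask

seq, mask = build_lm_inputs([101, 102, 103], [5, 6])
print(len(seq), sum(mask))  # 7 4
```

The appeal of this setup is that the same decoder and loss serve captioning, VQA, and even classification, since every task reduces to generating text.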
The ability to modulate the stiffness of soft actuators plays a vital role in improving the efficiency of interaction with the environment. However, for unidirectional stiffness modulation mechanisms, high lateral stiffness and a wide range of bending stiffness cannot be guaranteed at the same time. Therefore, we draw inspiration from the anatomy of the human finger and propose a soft actuator with bidirectional tunable stiffness (BTSA). The BTSA consists of air-tendon hybrid actuation (ATA) and bone-like structures (BLS). The ATA can tune the bending stiffness from 0.2 N/mm to 0.7 N/mm, about 3.5 times. The BLS can enhance the lateral stiffness up to 4.2 times compared with the actuator without BLS. Meanwhile, the lateral stiffness can be modulated within a certain stiffness range (e.g., from 0.35 N/mm to 0.46 N/mm when the bending angle is 45 degrees). The BLS is designed based on a simplified stiffness analysis model, and a wax-based fabrication method is proposed to ensure airtightness. Experiments on fingertip force, bending stiffness, and lateral stiffness are conducted to verify these properties.
We propose DEFR, a detection-free method to recognize human-object interactions (HOI) at the image level without using object locations or human poses. This is challenging, as the detector is an integral part of existing methods. In this paper, we present two findings to boost the performance of the detection-free approach, which significantly outperforms the detection-assisted state of the art. First, we find it crucial to effectively leverage the semantic correlations among HOI classes. A remarkable gain can be achieved by using the language embeddings of HOI labels to initialize the linear classifier, which encodes the structure of HOIs to guide training. Further, we propose a Log-Sum-Exp Sign (LSE-Sign) loss to facilitate multi-label learning on long-tailed datasets by balancing gradients over all classes in a softmax format. Our detection-free approach achieves 65.6 mAP in HOI classification on HICO, outperforming the detection-assisted state of the art (SOTA) by 18.5 mAP, and 52.7 mAP on one-shot classes, surpassing the SOTA by 27.3 mAP. Unlike previous work, our classification model (DEFR) can be directly used in HOI detection without any additional training, by connecting it to an off-the-shelf object detector whose bounding-box outputs are converted into binary masks for DEFR. Surprisingly, this simple connection of two decoupled models achieves SOTA performance (32.35 mAP).
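A log-sum-exp loss over signed scores of the kind named above can be sketched as follows. The formula L = log(1 + Σᵢ exp(−yᵢ·sᵢ)), with labels yᵢ ∈ {+1, −1}, is our reading of the abstract and should be checked against the paper; the stabilization trick is a standard log-sum-exp shift.

```python
import math

# Hedged sketch of an LSE-Sign-style loss as we read the abstract:
# L = log(1 + sum_i exp(-y_i * s_i)) with y_i in {+1, -1}, so gradients
# over all classes are coupled in a softmax-like form. Check against the
# paper before relying on this exact formulation.

def lse_sign_loss(scores, labels):
    terms = [-y * s for y, s in zip(labels, scores)]
    m = max(0.0, max(terms))  # shift for numerical stability
    total = math.exp(-m) + sum(math.exp(t - m) for t in terms)
    return m + math.log(total)

# confident correct predictions give a near-zero loss
print(lse_sign_loss([10.0, 10.0], [1, 1]))
```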
Tremendous progress has been made in recent years in developing better image captioning models, yet most of them rely on a separate object detector to extract regional features. Recent vision-language studies are shifting towards a detector-free trend by leveraging grid representations for more flexible model training and faster inference speed. However, this development has primarily focused on image understanding tasks and remains less investigated for the caption generation task. In this paper, we pursue a better detector-free image captioning model and propose a pure vision-transformer-based image captioning model, dubbed ViTCap, in which grid representations are used without extracting regional features. To improve performance, we introduce a novel Concept Token Network (CTN) to predict semantic concepts and then incorporate them into end-to-end captioning. In particular, the CTN is built on a vision transformer and is designed to predict concept tokens through a classification task; the rich semantic information they contain greatly benefits the captioning task. Compared with previous detector-based models, ViTCap drastically simplifies the architecture while achieving competitive performance on various challenging image captioning datasets. In particular, ViTCap reaches a 138.1 CIDEr score on the COCO-caption Karpathy split, and 93.8 and 108.6 CIDEr scores on the nocaps and Google-CC captioning datasets, respectively.
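Since the concept-prediction step is described as a classification task over semantic concepts, it can be pictured as picking the highest-scoring concepts from the classifier's output. The function and vocabulary below are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical sketch of a concept-prediction step: treat concept
# prediction as multi-label classification and keep the top-k concepts
# from the output scores. Names and scores are made up for the example.

def select_concepts(logits, vocab, k=3):
    idx = np.argsort(logits)[::-1][:k]  # indices of the k largest scores
    return [vocab[i] for i in idx]

print(select_concepts(np.array([0.1, 2.3, 0.7, 1.5]),
                      ["dog", "ball", "grass", "park"], k=2))
# ['ball', 'park']
```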
Practical applications employing deep learning must guarantee inference quality. However, we found that the inference quality of state-of-the-art and state-of-the-practice methods in practical applications has a long-tailed distribution. In the real world, many tasks have strict requirements for the quality of deep learning inference, such as safety-critical and mission-critical tasks. The fluctuation of inference quality seriously affects their practical applications, and the quality at the tail may lead to severe consequences. State-of-the-art and state-of-the-practice methods with outstanding inference quality, designed and trained under loose constraints, still have poor inference quality under constraints of practical significance. On the one hand, neural network models must be deployed on complex systems with limited resources. On the other hand, safety-critical and mission-critical tasks need to meet additional metric constraints while ensuring high inference quality. We coin a new term, ``tail quality,'' to characterize this essential requirement and challenge. We also propose a new metric, ``X-Critical-Quality,'' to measure inference quality under certain constraints. This article reveals factors contributing to the failure of state-of-the-art and state-of-the-practice algorithms and systems in real scenarios. Therefore, we call for establishing innovative methodologies and tools to tackle this enormous challenge.
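The core point above, that a long-tailed quality distribution hides behind a good average, can be illustrated with a tiny numeric sketch. The tail metric below is our illustration of the idea only; the actual ``X-Critical-Quality'' metric is defined in the article.

```python
# A minimal sketch of quantifying "tail quality": repeat inference many
# times, record a quality score per run, and report the mean over the
# worst tail fraction instead of the overall mean. Illustration only;
# not the article's X-Critical-Quality definition.

def tail_quality(qualities, tail_fraction=0.01):
    """Mean quality over the worst tail_fraction of runs."""
    xs = sorted(qualities)
    k = max(1, int(len(xs) * tail_fraction))
    return sum(xs[:k]) / k

runs = [0.9] * 99 + [0.1]      # one bad run hides in the average
print(sum(runs) / len(runs))   # mean looks fine: 0.892
print(tail_quality(runs))      # tail quality exposes it: 0.1
```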
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With such a novel design, X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common and rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) better or competitive finetuned performance to other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.
Inductive reasoning is a core component of human intelligence. In past research on inductive reasoning within computer science, logic language has been used as the representation of knowledge (facts and rules, more specifically). However, logic language can cause systematic problems for inductive reasoning, such as the inability to handle raw inputs like natural language, sensitivity to mislabeled data, and incapacity to handle ambiguous input. To this end, we propose a new task, which is to induce natural language rules from natural language facts, and create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language. New automatic metrics are also proposed and analysed for the evaluation of this task. With DEER, we investigate a modern approach to inductive reasoning where we use natural language as the representation of knowledge instead of logic language and use pretrained language models as ''reasoners''. Moreover, we provide the first comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts. We also propose a new framework for this task, drawing insights from the philosophy literature, which, as we show in the experiment section, surpasses baselines in both automatic and human evaluations.
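The task setup described above, prompting a pretrained language model to induce a natural-language rule from natural-language facts, can be sketched as a prompt-construction step. The template below is our illustration, not the paper's actual prompt.

```python
# Hedged sketch of the task setup: given natural-language facts, build a
# prompt asking a pretrained language model to induce a rule in natural
# language. The template wording is our own illustration.

def induction_prompt(facts):
    lines = "\n".join(f"- {f}" for f in facts)
    return ("Facts:\n" + lines +
            "\nInduce a general rule in natural language"
            " that explains these facts:")

print(induction_prompt([
    "Robins can fly.",
    "Sparrows can fly.",
]))
```

A model completion for this prompt would then be scored against the gold rule with the paper's automatic metrics or by human evaluation.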
Previous computation models either have equivalent abilities in representing all computations but fail to provide primitive operators for programming complex algorithms, or lack the generalized expression ability to represent newly-added computations. This article presents a unified computation model with generalized expression ability and a concise set of primitive operators for programming high-level algorithms. We propose a unified data abstraction -- Tensor of List -- and offer a unified computation model based on it, which we call the ToL model (in short, ToL). ToL introduces five atomic computations that can represent any elementary computation by finite composition, ensured by strict formal proof. Based on ToL, we design a pure-functional language -- ToLang. ToLang provides a concise set of primitive operators that can be used to program complex big data and AI algorithms. Our evaluations show that ToL has generalized expression ability and a built-in performance indicator, namely a strictly defined computation metric -- elementary operation count (EOPs) -- which is consistent with FLOPs within a small error range.
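The claim that an elementary operation count agrees with FLOPs within a small error can be made concrete with a dense matrix multiply. The counting convention below (each output entry needs n multiplications and n − 1 additions, versus the customary 2mnk FLOPs estimate) is our assumption for illustration, not ToL's formal definition of EOPs.

```python
# Illustrative comparison of an elementary-operation count against the
# usual FLOPs estimate for (m x n) @ (n x k). Counting convention is our
# assumption, not ToL's formal EOPs definition.

def matmul_eops(m, n, k):
    """Per output entry: n multiplications and n - 1 additions."""
    return m * k * (2 * n - 1)

def matmul_flops(m, n, k):
    """Customary estimate: 2 * m * n * k."""
    return 2 * m * n * k

e, f = matmul_eops(256, 256, 256), matmul_flops(256, 256, 256)
print(abs(f - e) / f)  # relative gap of 1/512, i.e. under 0.2%
```

For large inner dimensions the two counts converge, which is consistent with the "small error range" claimed above.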